MAD: Modality Agnostic Distance Measure for Image Registration
Multi-modal image registration is a crucial pre-processing step in many
medical applications. However, it is a challenging task due to the complex
intensity relationships between different imaging modalities, which can result
in large discrepancy in image appearance. The success of multi-modal image
registration, whether it is conventional or learning based, is predicated upon
the choice of an appropriate distance (or similarity) measure. In particular,
deep learning registration algorithms lose accuracy or even fail completely
when attempting to register data from an "unseen" modality. In this work, we
present Modality Agnostic Distance (MAD), a deep image distance measure that
utilises random convolutions to learn the inherent geometry of the images while
being robust to large appearance changes. Random convolutions are
geometry-preserving modules which we use to simulate an infinite number of
synthetic modalities, alleviating the need for aligned paired data during
training. We can therefore train MAD on a mono-modal dataset and successfully
apply it to a multi-modal dataset. We demonstrate not only that MAD can
affinely register multi-modal images successfully, but also that it has a
larger capture range than traditional measures such as Mutual Information and
Normalised Gradient Fields.
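The core idea of geometry-preserving random convolutions can be sketched in a few lines: a randomly weighted, spatially shared kernel changes an image's appearance (a "synthetic modality") while leaving its spatial structure in place. The following pure-Python sketch is illustrative only; the kernel size, weight distribution, and toy image are assumptions, not the paper's actual implementation.

```python
import random

def random_conv2d(img, kernel_size=3, seed=None):
    """Apply one random convolution (zero-padded) to a 2D image.

    The kernel weights are random, so the output takes on a new
    'synthetic modality' appearance; because the same kernel is applied
    at every position, the image geometry is preserved.
    """
    rng = random.Random(seed)
    k, pad = kernel_size, kernel_size // 2
    kernel = [[rng.gauss(0.0, 1.0) for _ in range(k)] for _ in range(k)]
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in range(-pad, pad + 1):
                for dx in range(-pad, pad + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += kernel[dy + pad][dx + pad] * img[yy][xx]
            out[y][x] = acc
    return out

# Toy 'image' with a bright square: two random convolutions yield two
# different appearances of the same underlying geometry.
img = [[1.0 if 2 <= y <= 5 and 2 <= x <= 5 else 0.0 for x in range(8)]
       for y in range(8)]
mod_a = random_conv2d(img, seed=0)
mod_b = random_conv2d(img, seed=1)
```

Training on pairs like `mod_a`/`mod_b` exposes a distance measure to unlimited appearance variation without requiring any aligned multi-modal data.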
Generative Modelling of the Ageing Heart with Cross-Sectional Imaging and Clinical Data
Cardiovascular disease, the leading cause of death globally, is an
age-related condition. Understanding the morphological and functional changes of
the heart during ageing is a key scientific question, the answer to which will
help us define important risk factors of cardiovascular disease and monitor
disease progression. In this work, we propose a novel conditional generative
model to describe the changes of 3D anatomy of the heart during ageing. The
proposed model is flexible and allows integration of multiple clinical factors
(e.g. age, gender) into the generating process. We train the model on a
large-scale cross-sectional dataset of cardiac anatomies and evaluate on both
cross-sectional and longitudinal datasets. The model demonstrates excellent
performance in predicting the longitudinal evolution of the ageing heart and
modelling its data distribution. The code is available at
https://github.com/MengyunQ/AgeHeart
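One common way to integrate clinical factors such as age and gender into a generative model is to concatenate them with the latent code before decoding. The sketch below illustrates that conditioning mechanism only; the single random linear layer standing in for the generator network, the dimensions, and the condition encoding are all assumptions for illustration, not the paper's architecture.

```python
import random

def make_decoder(latent_dim, cond_dim, out_dim, seed=0):
    """One random linear layer standing in for a trained generator."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 0.1) for _ in range(latent_dim + cond_dim)]
            for _ in range(out_dim)]

def generate(decoder, z, conditions):
    """Conditioning by concatenation: clinical factors join the latent code."""
    x = list(z) + list(conditions)
    return [sum(w * xi for w, xi in zip(row, x)) for row in decoder]

dec = make_decoder(latent_dim=4, cond_dim=2, out_dim=3)
z = [0.5, -0.2, 0.1, 0.8]
# Same latent identity, different age condition (age normalised, sex code):
young = generate(dec, z, [30 / 100.0, 0.0])
old = generate(dec, z, [80 / 100.0, 0.0])
```

Holding the latent code fixed while sweeping the age condition is how such a model can render the predicted ageing trajectory of one anatomy.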
Learn2Reg: comprehensive multi-task medical image registration challenge, dataset and evaluation in the era of deep learning
Image registration is a fundamental medical image analysis task, and a wide
variety of approaches have been proposed. However, only a few studies have
comprehensively compared medical image registration approaches on a wide range
of clinically relevant tasks. This limits the development of registration
methods, the adoption of research advances into practice, and a fair benchmark
across competing approaches. The Learn2Reg challenge addresses these
limitations by providing a multi-task medical image registration data set for
comprehensive characterisation of deformable registration algorithms.
Continuous evaluation remains possible at
https://learn2reg.grand-challenge.org. Learn2Reg covers a wide range of
anatomies (brain, abdomen, and thorax), modalities (ultrasound, CT, MR),
availability of annotations, as well as intra- and inter-patient registration
evaluation. We established an easily accessible framework for training and
validation of 3D registration methods, which enabled the compilation of results
of over 65 individual method submissions from more than 20 unique teams. We
used a complementary set of metrics, including robustness, accuracy,
plausibility, and runtime, enabling unique insight into the current
state-of-the-art of medical image registration. This paper describes datasets,
tasks, evaluation methods and results of the challenge, as well as results of
further analysis of transferability to new datasets, the importance of label
supervision, and resulting bias. While no single approach worked best across
all tasks, many methodological aspects were identified that push medical image
registration to a new state of the art. Furthermore, we dispelled the common
belief that conventional registration methods have to be much slower than
deep-learning-based methods.
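Registration accuracy in benchmarks of this kind is commonly scored by label overlap between the warped moving segmentation and the fixed segmentation, typically via the Dice coefficient. A minimal sketch of that metric (the flattened binary-mask representation is an assumption for brevity):

```python
def dice(seg_a, seg_b):
    """Dice overlap between two binary label masks (flattened lists).

    Returns 2|A ∩ B| / (|A| + |B|); 1.0 for two empty masks by convention.
    """
    inter = sum(1 for a, b in zip(seg_a, seg_b) if a and b)
    total = sum(seg_a) + sum(seg_b)
    return 2.0 * inter / total if total else 1.0

# A moving mask shifted one voxel against the fixed mask:
score = dice([1, 1, 1, 0, 0, 0], [0, 1, 1, 1, 0, 0])  # = 2/3
```

In practice such a challenge complements overlap scores with robustness (worst-case percentiles), deformation plausibility (e.g. Jacobian-based folding checks), and runtime, since Dice alone rewards implausible warps.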
Deformable medical image registration using Deep Learning
Medical image registration involves aligning two or more images of the same subject or of different subjects, acquired with the same or different imaging modalities. It is a fundamental task in medical image analysis, with applications in disease monitoring, treatment planning, motion tracking, population analysis and many more. However, it is challenging due to a variety of factors, such as differences in imaging modalities, anatomical variations, and deformation caused by organ motion or surgical intervention. Deep Learning (DL) has shown great promise in addressing some of these challenges. Advanced deep neural networks can learn complex features from data and predict non-linear (deformable) transformations between input images, solving registration problems with significantly higher computational efficiency than non-DL registration methods. The works presented in this thesis focus on investigating and improving aspects of deformable DL registration. First, a study was conducted to investigate the effectiveness of supervised and unsupervised training of DL registration for cardiac motion estimation in Magnetic Resonance (MR) images. Second, a DL registration method is proposed that uses a differentiable Mutual Information (MI) loss and diffeomorphic free-form deformation (FFD), enabling accurate and well-regularised registration of medical images across modalities. Finally, an accurate, data-efficient and robust DL registration method is developed by embedding variational optimisation in the learning-based framework.
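Mutual information measures how predictable one image's intensities are from the other's, which is why it serves as a multi-modal similarity measure. The thesis uses a differentiable MI loss; the sketch below is only the classical non-differentiable histogram estimate (bin count and the equal-width binning are assumptions), to show what the quantity computes.

```python
import math
from collections import Counter

def mutual_information(x, y, bins=4):
    """Histogram estimate of mutual information between two intensity lists.

    MI = sum_{i,j} p(i,j) * log( p(i,j) / (p(i) * p(j)) ), with intensities
    quantised into equal-width bins. Higher MI means the intensity of one
    image is more predictable from the other, i.e. better alignment.
    """
    def bin_index(v, lo, hi):
        if hi == lo:
            return 0
        return min(bins - 1, int((v - lo) / (hi - lo) * bins))

    lx, hx = min(x), max(x)
    ly, hy = min(y), max(y)
    bx = [bin_index(v, lx, hx) for v in x]
    by = [bin_index(v, ly, hy) for v in y]
    n = len(x)
    joint = Counter(zip(bx, by))       # joint histogram counts
    cx, cy = Counter(bx), Counter(by)  # marginal histogram counts
    mi = 0.0
    for (i, j), c in joint.items():
        mi += (c / n) * math.log(c * n / (cx[i] * cy[j]))
    return mi
```

A differentiable version, as needed for gradient-based registration, would replace the hard bin assignment with soft (e.g. Parzen-window) binning so that gradients flow through the histogram.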